This paper presents a new method for restoring digital videos via a deep Plug-and-Play (PnP) approach. Under a Bayesian formalism, the method consists in using a deep convolutional video denoising network in place of the proximal operator of the prior within an alternating optimization scheme. We distinguish ourselves from prior PnP work by directly applying the method to restore a digital video from a degraded video observation. In this way, a network trained for video denoising can be repurposed for other video restoration tasks. Our experiments on video deblurring, super-resolution, and interpolation of random missing pixels all show a clear benefit to using a network specifically designed for video denoising, as it yields better restoration performance and better temporal stability under the same PnP formulation. Moreover, our method compares favorably to applying a different state-of-the-art PnP scheme separately on each frame of the sequence. This opens new perspectives in the field of video restoration.
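Below is a minimal, self-contained sketch of the alternating PnP scheme just described, applied to random missing-pixel interpolation on a video volume. The deep video denoiser is stubbed with a Gaussian smoother (`toy_denoiser`) purely as a placeholder for the trained network the paper plugs in; the splitting weight `mu` and the iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_denoiser(v, strength):
    # Stand-in for a deep video denoiser D(v); smooths jointly over
    # (time, height, width), so temporal information is exploited.
    return gaussian_filter(v, sigma=strength)

def pnp_interpolate(y, mask, mu=1.0, iters=30):
    """Half-quadratic splitting: the data step has a closed form for
    inpainting; the prior's proximal operator is replaced by the denoiser."""
    x = y.copy()
    z = y.copy()
    for _ in range(iters):
        # Data-fidelity step: argmin_x ||mask*(x - y)||^2 + mu*||x - z||^2
        x = (mask * y + mu * z) / (mask + mu)
        # Prior step: proximal operator replaced by the (video) denoiser
        z = toy_denoiser(x, strength=1.0)
    return x

# Toy example: a 10-frame grayscale video with 50% missing pixels.
rng = np.random.default_rng(0)
video = rng.random((10, 64, 64))
mask = (rng.random(video.shape) > 0.5).astype(float)
restored = pnp_interpolate(video * mask, mask)
print(restored.shape)  # (10, 64, 64)
```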
Image super-resolution is a one-to-many problem, but most deep-learning-based methods only provide a single solution to this problem. In this work, we tackle the problem of diverse super-resolution by reusing VD-VAE, a state-of-the-art variational autoencoder (VAE). We find that the hierarchical latent representation learned by VD-VAE naturally separates the image low-frequency information, encoded in the latent groups at the top of the hierarchy, from the image high-frequency details, determined by the latent groups at the bottom of the hierarchy. Starting from this observation, we design a super-resolution model exploiting the specific structure of the VD-VAE latent space. Specifically, we train an encoder to encode low-resolution images into the subset of the VD-VAE latent space encoding the low-frequency information, and we combine this encoder with the VD-VAE generative model to sample diverse super-resolved versions of a low-resolution input. We demonstrate the ability of our method to generate diverse solutions to the super-resolution problem on face super-resolution with upsampling factors x4, x8, and x16.
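The sampling procedure can be pictured as below. All model functions (`lr_encoder`, `vdvae_prior_sample`, `vdvae_decode`) are hypothetical stubs standing in for the trained low-resolution encoder and the frozen VD-VAE prior and decoder; only the control flow, fixing the top latents and resampling the bottom ones, reflects the method described.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TOP, N_BOTTOM, LATENT_DIM = 4, 12, 16  # assumed split of the hierarchy

def lr_encoder(lr_image):
    # Trained to predict the top-of-hierarchy latents (low frequencies)
    # from the low-resolution input; stubbed with random vectors here.
    return [rng.standard_normal(LATENT_DIM) for _ in range(N_TOP)]

def vdvae_prior_sample():
    # Bottom-of-hierarchy latents (high-frequency details) are sampled
    # from the VD-VAE prior; this is where the diversity comes from.
    return [rng.standard_normal(LATENT_DIM) for _ in range(N_BOTTOM)]

def vdvae_decode(latents):
    # Frozen VD-VAE decoder mapping the full latent stack to an image.
    return rng.random((256, 256, 3))

def diverse_super_resolve(lr_image, n_samples=3):
    top = lr_encoder(lr_image)          # fixed: shared low frequencies
    outputs = []
    for _ in range(n_samples):
        bottom = vdvae_prior_sample()   # varies: diverse high frequencies
        outputs.append(vdvae_decode(top + bottom))
    return outputs

samples = diverse_super_resolve(np.zeros((64, 64, 3)))
print(len(samples), samples[0].shape)  # 3 (256, 256, 3)
```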
Since the seminal work of Venkatakrishnan et al. in 2013, Plug & Play (PnP) methods have become ubiquitous in Bayesian imaging. These methods derive minimum mean square error (MMSE) or maximum a posteriori (MAP) estimators for inverse problems in imaging by combining an explicit likelihood function with a prior that is implicitly defined by an image denoising algorithm. The PnP algorithms proposed in the literature differ mainly in the iterative schemes they use for optimization or sampling. In the case of optimization schemes, some recent works guarantee convergence to a fixed point, albeit not necessarily a MAP estimate. In the case of sampling schemes, to the best of our knowledge, there is no known proof of convergence. There also remain important open questions regarding whether the underlying Bayesian models and estimators are well defined, well posed, and have the basic regularity properties required to support these numerical schemes. To address these limitations, this paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with PnP priors. We introduce two algorithms: 1) PnP-ULA (unadjusted Langevin algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (stochastic gradient descent) for MAP inference. Leveraging recent results on the quantitative convergence of Markov chains, we establish detailed convergence guarantees for these two algorithms under realistic assumptions on the denoising operator used, with special attention to denoisers based on deep neural networks. We also show that these algorithms approximately target a decision-theoretically optimal Bayesian model that is well posed. The proposed algorithms are demonstrated on several canonical problems, such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualization and quantification.
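A minimal numerical sketch of the PnP-ULA iteration follows, assuming a Gaussian denoising likelihood and using Tweedie's formula, grad log p_eps(x) ≈ (D_eps(x) - x)/eps, to turn the denoiser into a score of the smoothed prior. The deep denoiser is stubbed with a Gaussian smoother, and the step size and iteration counts are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoiser(x, eps):
    # Stand-in for the deep denoiser D_eps (assumption: a smoother acts as
    # a crude denoiser; the paper uses a trained network with controlled
    # Lipschitz properties).
    return gaussian_filter(x, sigma=np.sqrt(eps))

def pnp_ula(y, sigma2=0.05, eps=0.05, delta=1e-3, n_iter=2000, burn_in=500):
    """Unadjusted Langevin sampling of the PnP posterior; the MMSE estimate
    is the mean of the post-burn-in samples."""
    rng = np.random.default_rng(0)
    x = y.copy()
    running_mean, count = np.zeros_like(y), 0
    for k in range(n_iter):
        grad_lik = (y - x) / sigma2                 # grad log p(y|x)
        grad_prior = (denoiser(x, eps) - x) / eps   # Tweedie approximation
        x = x + delta * (grad_lik + grad_prior) \
              + np.sqrt(2 * delta) * rng.standard_normal(x.shape)
        if k >= burn_in:
            running_mean += x
            count += 1
    return running_mean / count  # MMSE estimate

clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * np.random.default_rng(1).standard_normal(clean.shape)
print(np.abs(pnp_ula(noisy) - clean).mean())
```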
We introduce hp-greedy, a refinement approach for building gravitational wave surrogates as an extension of the standard reduced basis framework. Our proposal is data-driven, with a domain decomposition of the parameter space, local reduced bases, and a binary tree as the resulting structure, all obtained in an automated way. When compared to the standard global reduced basis approach, the numerical simulations of our proposal show three salient features: i) representations of lower dimension with no loss of accuracy, ii) a significantly higher accuracy for a fixed maximum dimensionality of the basis, in some cases by orders of magnitude, and iii) results that depend on the reduced basis seed choice used by the refinement algorithm. We first illustrate the key parts of our approach with a toy model and then present a more realistic use case of gravitational waves emitted by the collision of two spinning, non-precessing black holes. We discuss performance aspects of hp-greedy, such as overfitting with respect to the depth of the tree structure, and other hyperparameter dependencies. As two direct applications of the proposed hp-greedy refinement, we envision: i) a further acceleration of statistical inference, which might be complementary to focused reduced-order quadratures, and ii) the search for gravitational waves through clustering and nearest neighbors.
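For reference, here is a compact sketch of the standard global greedy reduced-basis construction that hp-greedy refines, on a toy family of damped sinusoids. hp-greedy additionally splits the parameter domain into subdomains with local bases arranged as a binary tree, which is not shown here; the tolerance and toy waveforms are illustrative assumptions.

```python
import numpy as np

def greedy_reduced_basis(train, tol=1e-6, max_dim=None):
    """Greedily add the worst-approximated training element to the basis
    until the worst relative projection error drops below `tol`."""
    train = np.asarray(train, dtype=float)
    norms = np.linalg.norm(train, axis=1)
    basis = []
    residual = train.copy()
    while True:
        errs = np.linalg.norm(residual, axis=1) / np.maximum(norms, 1e-30)
        worst = int(np.argmax(errs))
        if errs[worst] < tol or (max_dim and len(basis) >= max_dim):
            break
        e = residual[worst] / np.linalg.norm(residual[worst])
        basis.append(e)
        # Gram-Schmidt: remove the new direction from every residual
        residual -= np.outer(residual @ e, e)
    return np.array(basis)

# Toy model: damped sinusoids over a 1-D parameter (frequency).
t = np.linspace(0, 1, 200)
freqs = np.linspace(5, 15, 100)
waves = np.array([np.exp(-t) * np.sin(2 * np.pi * f * t) for f in freqs])
rb = greedy_reduced_basis(waves, tol=1e-4)
print("basis dimension:", len(rb))
```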
Modern machine learning pipelines are limited by data availability, storage quotas, privacy regulations, and expensive annotation processes. These constraints make it difficult or impossible to maintain a large-scale model trained on growing annotation sets. Continual learning directly approaches this problem, with the ultimate goal of devising methods where a neural network effectively learns relevant patterns for new (unseen) classes without significantly altering its performance on previously learned ones. In this paper, we address the problem of continual learning for video data. We introduce PIVOT, a novel method that leverages the extensive knowledge in pre-trained models from the image domain, thereby reducing the number of trainable parameters and the associated forgetting. Unlike previous methods, ours is the first approach that effectively uses prompting mechanisms for continual learning without any in-domain pre-training. Our experiments show that PIVOT improves over state-of-the-art methods by a significant 27% on the 20-task ActivityNet setup.
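A schematic, heavily simplified sketch of the prompting idea: a frozen pre-trained backbone with a small set of learnable prompt vectors as (almost) the only trainable parameters, so gradient updates cannot disturb the backbone's knowledge. The tiny Transformer used as a backbone below is a stand-in, not PIVOT's actual image-domain model.

```python
import torch
import torch.nn as nn

class PromptedFrozenBackbone(nn.Module):
    def __init__(self, dim=64, n_prompts=8, n_classes=10):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.backbone.parameters():   # frozen: no forgetting here
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        self.head = nn.Linear(dim, n_classes)  # also trainable

    def forward(self, tokens):                 # tokens: (B, T, dim)
        B = tokens.shape[0]
        prompts = self.prompts.unsqueeze(0).expand(B, -1, -1)
        feats = self.backbone(torch.cat([prompts, tokens], dim=1))
        return self.head(feats.mean(dim=1))    # pooled classification

model = PromptedFrozenBackbone()
tokens = torch.randn(2, 16, 64)  # a batch of pre-extracted frame tokens
print(model(tokens).shape)       # torch.Size([2, 10])
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("trainable parameters:", trainable)  # only prompts + head
```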
The costs and impacts of government corruption range from impairing a country's economic growth to affecting its citizens' well-being and safety. Public contracting between government dependencies and the private sector, referred to as public procurement, is fertile ground for corrupt practices, generating substantial monetary losses worldwide. Thus, identifying and deterring corrupt activities between the government and the private sector is paramount. However, due to several factors, corruption in public procurement is challenging to identify and track, leading to corrupt practices going unnoticed. This paper proposes a machine learning model based on an ensemble of random forest classifiers, which we call hyper-forest, to identify and predict corrupt contracts in México's public procurement data. The method correctly detects most of the corrupt and non-corrupt contracts in the evaluation dataset. Furthermore, we found that the most important predictors in the model are those related to the relationship between buyers and suppliers rather than those related to features of individual contracts. The method proposed here is also general enough to be trained on data from other countries. Overall, our work presents a tool that can help in the decision-making process to identify, predict, and analyze corruption in public procurement contracts.
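A minimal sketch, on synthetic data, of an ensemble of random forest classifiers in the spirit of the hyper-forest; the actual features, hyperparameters, and ensembling details are the paper's, and everything below (including the class imbalance and the soft-voting combination) is illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split

# Stand-in for contract features, e.g. buyer-supplier relationship variables;
# the weights argument mimics the rarity of corrupt contracts.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Several forests with different hyperparameters, combined by soft voting.
forests = [(f"rf{i}", RandomForestClassifier(n_estimators=200,
                                             max_depth=depth,
                                             random_state=i))
           for i, depth in enumerate([5, 10, None])]
hyper_forest = VotingClassifier(forests, voting="soft")
hyper_forest.fit(X_tr, y_tr)
print("test accuracy:", hyper_forest.score(X_te, y_te))
```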
Quality management and assurance is key for space agencies to guarantee the success of space missions, which are high-risk and extremely costly. In this paper, we present a system to generate quizzes, a common resource for evaluating the effectiveness of training sessions, from documents about quality assurance procedures in the Space domain. Our system leverages state-of-the-art autoregressive models such as T5 and BART to generate questions, and a RoBERTa model to extract answers to those questions, thus verifying their suitability.
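A rough sketch of such a generate-then-verify pipeline with Hugging Face `transformers`. The question-generation checkpoint name is an assumption (any T5/BART model fine-tuned for question generation would fill that role); `deepset/roberta-base-squad2` is a public RoBERTa extractive-QA checkpoint used here for answer extraction, and the confidence threshold is illustrative.

```python
from transformers import pipeline

# Assumed QG checkpoint; swap in whichever T5/BART QG model is available.
question_gen = pipeline("text2text-generation",
                        model="valhalla/t5-base-qg-hl")
answer_extractor = pipeline("question-answering",
                            model="deepset/roberta-base-squad2")

passage = ("Quality assurance procedures require that every component "
           "undergo thermal vacuum testing before integration.")

question = question_gen(passage)[0]["generated_text"]
answer = answer_extractor(question=question, context=passage)
# Keep a generated question only if the reader finds a confident answer.
if answer["score"] > 0.3:
    print("Q:", question)
    print("A:", answer["answer"])
```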
We present SpaceQA, to the best of our knowledge the first open-domain QA system for Space mission design. SpaceQA is part of an initiative by the European Space Agency (ESA) to facilitate access to, sharing, and reuse of information about Space mission design within the agency and with the public. We adopt a state-of-the-art architecture consisting of a dense retriever and a neural reader, and opt for an approach based on transfer learning rather than fine-tuning due to the lack of domain-specific annotated data. Our evaluation on a test set produced by ESA is largely consistent with the results originally reported for the evaluated retrievers and confirms the need for fine-tuning in reading comprehension. As of writing this paper, ESA is piloting SpaceQA internally.
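A bare-bones sketch of a retriever-reader stack built from off-the-shelf components, in line with the transfer-learning (no fine-tuning) setup described. The checkpoints named below are common public models, not necessarily the ones SpaceQA uses, and the documents are invented examples.

```python
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

documents = [
    "The mission uses a solar electric propulsion system.",
    "Thermal control is achieved with multi-layer insulation.",
    "The downlink operates in X-band at 8.4 GHz.",
]

retriever = SentenceTransformer("multi-qa-mpnet-base-dot-v1")
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")

doc_emb = retriever.encode(documents, convert_to_tensor=True)

def answer(question, top_k=1):
    # Dense retrieval picks the most relevant passage; the reader then
    # extracts an answer span from it.
    q_emb = retriever.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, doc_emb, top_k=top_k)[0]
    context = documents[hits[0]["corpus_id"]]
    return reader(question=question, context=context)

print(answer("What frequency does the downlink use?"))
```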
This paper presents the final results of the Out-Of-Vocabulary 2022 (OOV) challenge. The OOV competition addresses an important aspect that is generally not studied by optical character recognition (OCR) models, namely the recognition of scene-text instances unseen at training time. The competition compiled a collection of public scene-text datasets comprising 326,385 images with 4,864,405 scene-text instances, thereby covering a wide range of data distributions. A new and independent validation and test set was formed, containing scene-text instances that are out of vocabulary with respect to the training set. The competition was run on two tasks, end-to-end and cropped-word text recognition. A thorough analysis of the results of the baselines and the different participants is presented. Interestingly, current state-of-the-art models show a significant performance gap under the newly studied setting. We conclude that the OOV dataset proposed in this challenge will be an important area to explore in order to develop scene-text models that achieve more robust and generalized predictions.
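As a small illustration of how such an out-of-vocabulary split can be constructed, the toy snippet below keeps only the test instances whose transcription never appears in the training vocabulary; the data shown is invented, not the competition's.

```python
# Word vocabulary collected from the training-set transcriptions.
train_words = {"exit", "open", "sale", "stop", "cafe"}

test_instances = [
    {"image": "img_001.jpg", "text": "EXIT"},
    {"image": "img_002.jpg", "text": "XYLOGRAPH"},
    {"image": "img_003.jpg", "text": "CAFE"},
]

# Keep only instances whose word was never seen at training time.
oov_split = [inst for inst in test_instances
             if inst["text"].lower() not in train_words]
print(oov_split)  # only the out-of-vocabulary instance remains
```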
Hyperparameter optimization (HPO) is a well-studied research field. However, the effects and interactions of the components in an HPO pipeline have not been investigated thoroughly. We therefore ask ourselves: can the landscape of an HPO problem be biased by the pipeline used to evaluate individual configurations? To address this question, we propose to analyze the effect of the HPO pipeline on HPO problems using fitness landscape analysis. In particular, we study the DS-2019 HPO benchmark dataset, looking for patterns that could indicate evaluation pipeline malfunctions and relating them to HPO performance. Our main findings are: (i) in most cases, a large number of diverse hyperparameter configurations yield the same ill performance, most likely associated with majority-class prediction models; and (ii) in these cases, a deteriorated correlation between the observed fitness and the average fitness is observed, potentially making the deployment of local-search-based HPO strategies more difficult. Finally, we conclude that the definition of the HPO pipeline can negatively affect the HPO landscape.
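A toy sketch of the kind of landscape statistic referred to in finding (ii): the correlation between each configuration's fitness and the mean fitness of its one-step neighbors on a discretized hyperparameter grid. The grid and fitness function below are invented stand-ins for a real evaluation pipeline; a low correlation signals a landscape that is hard for local search.

```python
import itertools
import numpy as np

grid = {"lr": [0.001, 0.01, 0.1], "depth": [2, 4, 8], "reg": [0.0, 0.1, 1.0]}
keys = list(grid)
configs = [dict(zip(keys, v)) for v in itertools.product(*grid.values())]

def fitness(cfg):
    # Toy stand-in for training + validating a model with config `cfg`.
    return -abs(np.log10(cfg["lr"]) + 2) - 0.1 * cfg["depth"] - cfg["reg"]

def neighbors(cfg):
    # One-step neighbors: change a single hyperparameter to an adjacent value.
    for k in keys:
        idx = grid[k].index(cfg[k])
        for j in (idx - 1, idx + 1):
            if 0 <= j < len(grid[k]):
                yield {**cfg, k: grid[k][j]}

f = np.array([fitness(c) for c in configs])
nf = np.array([np.mean([fitness(n) for n in neighbors(c)]) for c in configs])
print("fitness/neighborhood-fitness correlation:",
      np.corrcoef(f, nf)[0, 1].round(3))
```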